TangibleNet: Synchronous Network Data Storytelling through
Tangible Interactions in Augmented Reality
Kentaro Takahira
The Hong Kong University of
Science and Technology
Hong Kong, China
ktakahira@connect.ust.hk
Wong Kam-Kwai
The Hong Kong University of
Science and Technology
Hong Kong, China
kkwongar@connect.ust.hk
Leni Yang
The Hong Kong University of
Science and Technology
Hong Kong, China
lyangbb@connect.ust.hk
Xian Xu
The Hong Kong University of
Science and Technology
Hong Kong, China
xianxu@ust.hk
Takanori Fujiwara
Linköping University
Norrköping, Sweden
takanori.fujiwara@liu.se
Huamin Qu
The Hong Kong University of
Science and Technology
Hong Kong, China
huamin@cse.ust.hk
Figure 1: TangibleNet is a projector-based AR prototype for live data storytelling using network visualizations. Presenters interact with node-link diagrams through double-sided magnets and hand gestures. By leveraging the affordance of physical objects, TangibleNet enables quick, engaging interactions and provides an improvisational presentation experience.
Abstract
Synchronous data-driven storytelling with network visualizations presents significant challenges due to the complexity of real-time manipulation of network components. While existing research addresses asynchronous scenarios, there is a lack of effective tools for live presentations. To address this gap, we developed TangibleNet, a projector-based AR prototype that allows presenters to interact with node-link diagrams using double-sided magnets during live presentations. The design process was informed by interviews with professionals experienced in synchronous data storytelling and workshops with 14 HCI/VIS researchers. Insights from the interviews helped identify key design considerations for integrating physical objects as interactive tools in presentation contexts. The workshops contributed to the development of a design space mapping user actions to interaction commands for node-link diagrams. Evaluation with 12 participants confirmed that TangibleNet supports intuitive interactions and enhances presenter autonomy, demonstrating its effectiveness for synchronous network-based data storytelling.

This work is licensed under a Creative Commons Attribution 4.0 International License.
CHI ’25, Yokohama, Japan
© 2025 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-1394-1/25/04
https://doi.org/10.1145/3706598.3714265
CCS Concepts
• Human-centered computing → Visualization design and evaluation methods.
Keywords
data-driven storytelling, tangible interaction, augmented reality,
network visualization
ACM Reference Format:
Kentaro Takahira, Wong Kam-Kwai, Leni Yang, Xian Xu, Takanori Fujiwara, and Huamin Qu. 2025. TangibleNet: Synchronous Network Data Storytelling through Tangible Interactions in Augmented Reality. In CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 01, 2025, Yokohama, Japan. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/3706598.3714265
1 Introduction
Networks are prevalent in data-driven storytelling, modeling phenomena in various fields such as international relations, ecosystems, and social networks [42]. Significant efforts have been made to improve the communication of network data through various formats, including infographics [61], comics [2, 30, 31], and animations [3, 54]. However, research in network-based data storytelling predominantly focuses on asynchronous scenarios, where audiences engage with the content individually, without interaction or live participation from narrators [32]. This emphasis does not extend to synchronous scenarios, where narrators guide the audience through the narratives in real-time.
Synchronous data storytelling has become increasingly common and important in various contexts. It is frequently adopted in organizational decision-making meetings [7, 12] (e.g., executives illustrate business strategies with corporate networks) and public communication [32] (e.g., news anchors explain international affairs to audiences). Successful synchronous presentations require careful planning and coordination, such as timing visual aids to the narrative pace, complementing the stories with different communication modalities (e.g., voice and gestures), and fostering dynamic interactions between the presenter and the audiences [8, 24, 39, 55]. A prominent example is Hans Rosling’s presentations, where he synchronizes gestures and body postures, such as pointing out patterns and tracing trends, with animated visualizations to guide the audiences’ attention [48, 49]. Additionally, Rosling incorporated physical objects to interact with charts and engage the audience. For instance, he once used a meter-long teaching stick in a presentation [51], eliciting laughter from the audience. In his talks about global population growth [52, 53], he used boxes to represent populations and demonstrated the population dynamics by stacking and unstacking the boxes, making abstract data more tangible and understandable.
Creating dynamic narratives similar to Rosling’s presentations in traditional slideshow software (e.g., PowerPoint) requires preparing multiple slides for each state of the charts, along with carefully coordinated transitions and animations. This process is time-consuming and lacks support for gesture- and physical-object-based interactions, which can provide advantages like enhanced understanding [11] and engagement [24]. Prior research has explored innovative authoring tools that enable presenters to map gestures and postures to control visual elements in augmented presentations [8, 16, 24, 55]. However, these tools do not support network-based data storytelling. In network-based data storytelling, presenters often use node-link diagrams [38] and need to manipulate various visual components (e.g., nodes, links, and annotations) and their attributes (e.g., color, position, and size) at both individual and group levels [2]. Relying solely on gestures to map all possible actions can lead to overly complex gestures, making them difficult to remember, prone to errors, and vulnerable to recognition failures. These limitations have been found in evaluations of systems that rely solely on gesture-based interactions with visualizations [24].
This work addresses the above two-fold challenge: the lack of augmented presentation support for networks and the complexity of gesture-based interactions. To overcome these, we propose an approach where presenters use physical objects to interact with network visualizations during their presentations. Physical objects offer diverse affordances, such as direct manipulation and spatial interaction, that simplify complex tasks through familiar interactions [18, 60]. For instance, prior studies have used physical objects like cubes, spheres, and sticks, employing actions such as flipping, stacking, and combining to streamline complex visual manipulations [14, 25, 36, 71]. Leveraging these affordances can resolve the complexity of gesture-based interactions and enhance the intuitiveness of interacting with network visualizations. To illustrate this concept, we developed TangibleNet, a portable, projector-based augmented reality (AR) prototype. TangibleNet enables presenters to interact with network visualizations projected onto a whiteboard using double-sided magnets.
To inform the system design, we first interviewed five professionals, specifically news anchors, whose roles involve narrating data stories synchronously with visuals. Despite their extensive expertise in data communication, they have been largely overlooked in visualization research. While our prototype targets a broader audience beyond news anchors, we aimed to uncover insights on effective real-time communication not yet explored in the literature. We then conducted workshops with 14 HCI/VIS researchers to explore interaction methods for network visualizations using physical objects. These workshops led to a design space characterized by three key dimensions: 1) Interaction Command, 2) Primary Modality, and 3) Multiplexity of Physical Objects. Insights from both studies reinforced the untapped potential of using physical objects and informed design considerations for synchronous network data storytelling. Building on these findings, we developed TangibleNet and evaluated it with 12 participants. Most participants provided positive feedback on the naturalness of interactions, the engaging delivery process, and the enhanced sense of autonomy during presentations. We synthesized insights from the prototype and user feedback to propose design implications for future systems supporting physical interactions in synchronous data storytelling. In summary, our contributions are three-fold:
• Novel Scenario: We introduce synchronous network-based data storytelling, identifying key communication elements and system requirements through interviews with previously overlooked data communicators, news anchors (N=5).
• Design Space: We propose a framework for interacting with node-link diagrams using physical objects based on insights from a workshop with VIS/HCI researchers (N=14).
• TangibleNet Prototype: We develop and evaluate TangibleNet, demonstrating how physical objects enable network visualization interactions in synchronous storytelling. We confirmed its effectiveness through user studies (N=12).
2 Related Works
Our research explores the intersection of synchronous presen-
tation, network data storytelling, and physical objects for data
visualization interactions.
2.1 Synchronous Data-driven Storytelling
Traditionally, data-driven storytelling has focused on producing asynchronous content [37]. However, as data-driven decision-making becomes increasingly prevalent from casual discussions to formal presentations, the need for synchronous data-driven storytelling has grown significantly [7, 16, 24, 72]. This approach integrates multiple forms of communication, including speech, gestures, eye gaze, and physical or virtual props, to create more engaging and adaptable presentations [13, 40, 59, 64]. As Kang et al. [59] pointed out, effective presentations require the seamless coordination of gestures, language, and props. Many systems have been developed to support this integration.
Saquib et al. [55] investigated body-driven graphics, where pre-designed visuals are mapped to specific body parts and adjusted in response to the presenter’s movements. RealityTalk [39] utilizes a keyword-matching system to link spoken words with graphical elements. This system displays predefined graphics in real-time when specific keywords are recognized in speech, and these graphics can then be manipulated through hand gestures. Elastica [8] tackles the issue of recognition errors and presenter mistakes by enabling the dynamic adjustment of predefined graphic animations using both speech and gestures. This system allows presenters to define visual effects dynamically by combining body movements and spoken input.
The eective communication of data-driven insights requires
purpose-built systems, as general augmented presentation tools
are inadequate for this task [
24
]. In an interview study, Brehmer et
al. [
7
] found that synchronous data storytelling can range from
interactive, jam session-style presentations with exible data
visualization interactions to recital-style presentations with min-
imal audience engagement. The system requirements dier sig-
nicantly across these formats, leading the authors to propose
prototypes for various needs. In their early exploration, Lee et
al. [
34
] introduced SketchStory, a system for dynamic chart cre-
ation, annotation, and ltering via touch and pen on a wall display.
However, it requires pre-registering visual orders to minimize
mode switching and interaction complexity, limiting the ex-
ibility essential for synchronous data storytelling [
1
,
7
]. Addi-
tionally, while focused on visualization creation, its support for
ne-grained manipulation is limited, making it challenging to
synchronize speech with visuals closely. Hall et al. [
24
] intro-
duced Augmented Chironomia, a system designed for remote
presentations that enables gesture-based control of visualizations,
supported by an authoring tool [
16
]. This system overlays the
presenter’s webcam feed with interactive charts that can be ma-
nipulated in real-time through gestures. While this approach
works well for these types of visualizations, node-link diagrams
pose distinct challenges. Interacting with node-link diagrams re-
quires managing a diverse set of visual components (e.g., nodes,
links, and annotations) and their attributes (e.g., color, position,
and size) at both individual and group levels [
31
,
35
]. Relying
solely on body gestures for all interactions can lead to overly
complex gesture mappings, making them dicult to memorize,
error-prone, and prone to misinterpretation. While combining
speech and hand gestures has been proposed as a solution to ex-
pand the range of commands, using imperative voice commands
can feel awkward and potentially distract the audience [62].
2.2 Storytelling with Networks
Networks play a pivotal role in data-driven storytelling because of their structural flexibility and the clear representation they provide through node-link diagrams. They are widely used in various media, such as journalism, data videos, and data comics, to represent a wide range of topics, including interpersonal relationships, international relations, and ecosystems [2, 6, 17, 31, 42, 43]. Significant efforts have been made to develop tools that effectively communicate network data stories.
Spritzer et al. [61] developed a system to enhance node-link diagrams by allowing users to modify visual attributes and layouts, facilitating the creation of more communicative visualizations. Similarly, Romat et al. [47] proposed an interactive system that allows users to adjust visual attributes of multivariate network visualizations. Complementing these efforts, computational methods have also been introduced to support the effective communication of insights derived from network data analysis. Fujiwara et al. [20] presented a system that automatically composes concise visual summaries of network analysis provenance, aiding in the sharing and recalling of analysis processes and results. Chen et al. [9] proposed Calliope-Net, a system designed to automatically extract and annotate salient topological features in node-link diagrams to produce visually appealing fact sheets of networks. While these tools enhance the aesthetics and clarity and effectively summarize key insights, they do not address the dynamic visual transitions important for storytelling [2].
To eectively narrate changes in network data storytelling,
data comics have emerged as a compelling format. Bach et al. [
2
]
identied key design factors for representing dynamic networks
in data comics, including various graph elements, component
types, visual representations, and narrative patterns. Building
on this foundation, Kim et al. [
31
] developed DataToon, an in-
teractive authoring tool for creating network data stories in
data comics. This tool allows users to issue various commands
through pen and multi-touch inputs via mode changes. Addi-
tionally, Kim et al. [
30
] developed a semi-automatic authoring
tool designed explicitly for crafting data comics. These studies
oer valuable insights into the key elements of network data
storytelling, including essential network components, types of
changes, visual encoding techniques, and narrative styles. How-
ever, their focus is on developing tools and content for asynchro-
nous consumption, which diers from the design of spontaneous,
easily executed interactions needed for synchronous storytelling.
Consequently, live presentations often depend on static screen-
shots of network visualizations arranged in slide decks or rely on
pre-determined sequences to navigate various components (e.g.,
XMind [
70
]). These approaches are not only labor-intensive and
dicult to update but also limit the presenter’s ability to engage
in spontaneous, real-time interactions with the visualizations.
This limitation restricts the presenter’s capacity to dynamically
adapt to audience needs, reducing the opportunity to deliver a
personalized, engaging, and interactive storytelling experience.
2.3 Interacting with Visualizations Using
Physical Objects
Humans excel at sensing and manipulating physical objects, making them an effective medium for intuitive, low-effort interactions [27, 60, 68]. Physical objects also offer spatial multiplexing, allowing users to control multiple virtual elements through physical arrangements [18]. Consequently, physical interactions have attracted growing interest, particularly in augmented reality [21, 26]. Several studies have explored using simple geometric objects (e.g., cubes, cylinders, spheres) for virtual interaction [14, 28, 36, 71], leveraging actions like rotating, relocating, stacking, and tapping [25, 44, 67]. These simple shapes are versatile and applicable to a wide range of interaction designs. By aligning with users’ real-world experiences, these interactions help reduce learning effort and enhance usability [60].
Recent research has explored the use of physical objects in data analytics, emphasizing their potential to perform interaction commands (e.g., selecting, filtering, highlighting) across various visualization types and contexts. Ens et al. [15] introduced Uplift, which integrates tangible widgets with AR for energy analysis, using physical models and a bespoke slider to support tasks
Table 1: Proles of the ve interviewed news anchors.
ID
Gender
Experience Areas of Specialization Example Data
A1 Male 10 years
Economic Aairs; Social Issues;
Criminal Justice
Corporate Dynamics
Electoral Inuence Analysis
Accident Statistics
A2 Male 8 years
Sports; Social Trends; Crime Re-
porting
Sports Coverage
Accident Statistics
Crime Relationship Mapping
A3 Female 10 years
Public Administration; Eco-
nomic Aairs
Governance Metrics (e.g Administrative Statistics)
Economic Indicators (e.g Business Bankruptcy Rates
and Job Vacancy Ratios)
A4 Male 11 years
Criminal Justice; Political Anal-
ysis; Economic Reporting
Electoral Metrics (e.g Vote Counts, Inuence Analysis)
Crime Network
Political Aliation Analysis
A5 Male 10 years
Sports; Educational Reporting;
Social Trends
Sports Metrics (e.g Sports Results and Statistics)
Educational Statistics
Relations of Prominent Figures
such as annotation and filtering in collaborative settings. Satriadi et al. [57] explored tangible globes for geospatial visualization, leveraging affordances like rotation and tapping. Satriadi et al. [56] also proposed the Active Proxy Dashboard, enabling interaction with physical scale models for selection and filtering. Cordeil et al. [10] developed Embodied Axes, a system where tangible arms represent data axes. Users can spatially combine these axes and manipulate levers for selection and authoring. Suzuki et al. [65] explored shape-changing swarm interfaces, where users physically manipulate robots to construct visualizations.
Additionally, prior studies explored the use of data physicalization to connect physical and digital representations, allowing users to interact with digital data through tangible means [5]. Veldhuis et al. [69] developed a tangible scatterplot where users can manipulate tokens as data points to explore correlations, particularly in educational contexts. Taher et al. [66] examined how people interact with physical bar charts for common tasks like filtering and annotation. Le Goc et al. [33] introduced self-propelled micro-robots for interactive data manipulation. Bae et al. [4] employed physicalized networks to facilitate basic interactions with node-link diagrams, such as highlighting or filtering. Jansen and Dragicevic [29] proposed a conceptual framework for beyond-desktop interaction, which integrates tangible and embodied interactions into visualization models.
Researchers have also examined everyday objects for interaction. Tong et al. [67] proposed a design space for paper-based interactions, using affordances of paper such as folding, flipping, and tilting for various visualization tasks like filtering, zooming, and authoring. He et al. [25] explored cubes of different sizes for interacting with spatiotemporal data in mixed reality. While these studies address a range of visualization tasks and provide valuable insights into the rich affordances of physical objects, few specifically focus on network visualizations, and they do not consider their use in live data storytelling.

In summary, previous research has yet to explore the potential of physical interactions in synchronous data-driven storytelling, especially for network visualizations. While the importance of physical objects in storytelling has been recognized [46] and demonstrated in compelling cases [50, 52], their application in network data storytelling remains largely unexamined.
3 Formative Study
We first interviewed experienced news anchors to gain insights into their effective data communication practices and their views on using physical objects for synchronous data storytelling. These interviews helped identify the system requirements to support effective communication and the design considerations for integrating physical objects in presentation settings. Despite their extensive expertise in conveying data to broad audiences in real-time, to the best of our knowledge, news anchors have been largely overlooked by the visualization community. Although our approach of using physical objects to interact with network visualizations targets a broader audience, we aimed to uncover insights not yet covered in the existing literature [7, 8, 24, 34].
3.1 Interviews with News Anchors
We conducted semi-structured interviews with five news anchors (A1-A5) who regularly engage in data storytelling using visuals. The participants (one female, four males) had between 8 and 11 years of experience and were recruited via snowball sampling, starting with a personal acquaintance of one author (Table 1). They work at TV stations in Japan, representing two different institutions and based at different broadcasting locations. At the start of each interview, we outlined the project and obtained consent to record audio and take notes. We assured them that their identities would remain anonymous, their affiliated institutions would not be disclosed, and the data would be used exclusively for academic research purposes. We asked about their current practices in explaining data with visual aids, challenges, key aspects of effective communication, and the nature of network data stories they presented. We also explored their interests, expectations, and concerns about using physical objects in presentations. Each session lasted 90–120 minutes.
3.2 Data Analysis
The semi-structured interview results were saved as audio recordings and the interviewer’s notes. The audio recordings were transcribed into text and then combined with the notes. We used an open coding procedure to analyze this data. One of the authors reviewed all records, assigning codes to data segments representing a single idea or concept. Similar codes were then grouped, and each group was assigned a category. Three of the authors reviewed the groupings and categories, discussing their appropriateness until they agreed on the final coding results.

Figure 2: Illustrations of news anchors at work: (a) A3 explaining the rise in COVID-19 cases in Tokyo during a news segment. (b) A news anchor explaining the relationships between politicians using a physical board, as described by A4 and A5. The visuals are revealed in sync with the narration by removing stickers.
3.3 Results
The interview results confirmed the importance of interacting with visualizations in presentations, highlighted the potential of using physical objects for interaction, and provided insights into the challenges and design requirements of manipulating network visualizations with physical objects.
3.3.1 Current Practices of Interaction with Visualization in Presentations. All participants emphasized the importance of interaction with visuals during presentations for effective communication. Common practices included touch interaction via multi-touch displays and manually notifying studio staff to proceed with slides, as noted by A1: “I just give them a nod to switch” (Fig. 2-a). Additionally, participants also used physical boards covered with removable stickers, which could be progressively taken off to reveal new information (Fig. 2-b). All participants agreed on the effectiveness of gradually revealing the story through interaction, which aligns with previous research findings [7]. This approach was particularly valued for its flexibility, especially in integrating real-time data (e.g., disaster updates) and accommodating ad-hoc changes in presentation content and timing. A2-A4 further highlighted how body movement enhances communication by adding a performative aspect to the presentation. For example, A3 explained, “Using a pointer and body gestures to trace an upward trend in COVID case numbers really drives the point home” (Fig. 2-a). Similarly, A2 emphasized, “Removing stickers from a board to progressively reveal information engages the audience like uncovering mysteries” (Fig. 2-b).
3.3.2 Presentation with Networks. Participants explained the prevalence of presenting network data due to its structural versatility. They worked with various network data represented in node-link diagrams, such as relationships between political figures, international relations, inter-company networks, family trees, and crime connections. A4 noted, “Network diagrams let you map complex narratives, like showing alliances and conflicts between political figures and their parties during the last election.” These networks were typically composed of nodes, links, groups, and captions (e.g., text or images), featuring up to 10 nodes, often represented with icons or images. Participants adjusted visuals or revealed new elements as the story progressed. Unlike text or standard charts like bar or line graphs, node-link diagrams lack a set narrative direction (e.g., top-left to bottom-right), making it essential for presenters to guide the audience’s focus. Participants shared storytelling strategies, such as “starting with an overview, then zooming in on a specific node to explain a particular relationship, like showing intra-party dynamics after discussing inter-party relations,” or “comparing multiple networks, such as contrasting political alliances,” as noted by A4.
3.3.3 Opportunities for Using Physical Objects. Participants expressed an interest in using physical objects during presentations and suggested that familiar, tangible objects would enable them to focus on storytelling rather than system operations. The idea of using physical objects to represent network nodes was well-received. A2 noted, “I’ve used magnets to represent players in sports broadcasts, and this idea feels natural and intuitive.” Physical interactions were viewed as a way to enhance audience engagement, transforming the presentation into a form of performance art. As A3 put it, “Using props like TV shows do could make abstract data more understandable to the broader audience.” A1 appreciated the simplicity of using familiar objects, stating, “Familiar objects are easier to handle than having to remember complex gestures or touch commands.”
3.3.4 Current Challenges. Interviews revealed the following two challenges in presenting network data live:

C1 Limited Interactions: Presenting network data interactively, rather than simply navigating slides, poses challenges. These challenges arise from the complexity of network components and the need to design interactions that control each component while aligning the control with the narration. As a result, existing presentations often rely only on static diagrams and pre-sequenced slides, which restrict direct interaction with the visualizations. While physical boards with stickers that can be progressively removed create a visually engaging effect, A2 remarked that “It takes a long time to ask the art department to set things up, and it’s impossible to make adjustments mid-presentation.” This lack of direct interaction makes it harder for presenters to guide the audience’s focus through the interactions effectively. This limitation is critical since node-link diagrams lack a clear reading order.
C2 Interaction Disrupting Narrative Flow: A1, A3, and A5 noted challenges with multi-touch display methods during live presentations. Common issues include locating menus on large screens, particularly when presenting from the side, and the multiple steps required to switch presentation modes. A3 explained, “With our touch-display setup, we have to click a menu at the screen’s edge to select visuals, which takes time and disrupts the flow.” These interactions require prior practice and remain prone to errors even after practice. Errors further interrupt presentations, requiring mid-presentation corrections that disrupt the narrative flow and detract from audience engagement.
3.4 Design Requirements
Drawing from the current practices, identified challenges, and expert insights from the interviews, we defined the design requirements for a prototype supporting synchronous network data storytelling using physical objects as the interaction medium.

R1 Flexible Interactions with Network Components: The system should allow presenters to manipulate network components directly rather than relying on transitions between predefined states (e.g., slideshows). Direct manipulation of network components is essential for effectively guiding the audience’s attention and providing relevant context, especially given the lack of apparent reading orders in node-link diagrams. Allowing this flexibility supports improvisational storytelling, as emphasized in prior research [7, 24], and enables presenters to revisit the content, skip components, or dynamically adjust the layout. Such adaptability enriches the audience’s interactive and personalized experience while adapting to time constraints and other presentation conditions. These needs, identified by all participants, directly address the challenges outlined in C1.
R2 Simple and Intuitive Interactions: The system should facilitate intuitive interactions with low cognitive load, allowing presenters to focus on storytelling, aligning with prior studies’ considerations [34]. Simplifying interactions reduces the likelihood of mistakes and anxiety, ensuring ease of use during presentations. This approach makes the system accessible to a broader range of users with less need for prior knowledge. Interactions should be spontaneous and seamless to maintain the presentation’s flow. Additionally, the system should avoid strict adherence to predefined body movements, which necessitate extensive rehearsals and increase the risk of errors, as discussed by Cao et al. [8]. These requirements, mentioned by three participants, address the challenges identified in C2.
R3 Natural and Engaging Body Movements: The system should ensure that presenters' movements appear natural and engaging from the audience's perspective. Participants noted that gestures should function not only as interaction triggers but also as enhancements to presentation quality. Awkward gestures can distract the audience from the content. A1 observed, “Using a hand gesture to advance slides might seem magical or feel out of place to an audience unfamiliar with the system's logic, drawing attention away from the content and end up being distracting.” Participants suggested that interactions should fulfill necessary storytelling functions while remaining appealing to the audience. A5 emphasized that gradually revealing information by peeling away stickers guides the audience's focus and advances the storyline effectively. Similarly, A3 highlighted that tracing a trend in a bar chart with a pointer, while a line highlights the trend, can also effectively direct attention and advance the story. These insights, emphasized by all participants, highlight the importance of smooth, natural, and engaging interactions from the audience's perspective, addressing the challenges outlined in C2.
R4 Manageable Physical Objects: In discussions with participants, we identified key characteristics of physical objects as an interaction medium in live presentations. The system should limit both the number and types of objects to reduce the learning curve and simplify handling. The objects should be small enough to hold four or five in one hand, easy to grasp, and preferably thick and compact rather than card-like. Additionally, reusability is essential to lower costs and reduce environmental impact. These factors contribute to the efficient, manageable, and sustainable use of physical objects in presentations.
4 Soliciting Interactions
Our formative study highlighted the importance of interactions with network visualizations and identified key design requirements for effective interaction. Physical objects emerged as a promising approach, enabling presenters to manage network components easily and effectively. Building on the insights from this study, we conducted workshops to investigate concrete mappings between user actions and interaction commands. Here, we define user actions as controls on physical objects and gestures (e.g., moving a physical object) and interaction commands as manipulations of network visualizations (e.g., repositioning a network node). Our goal is to define a comprehensive design space that captures the diverse possibilities of these mappings.
4.1 Interactions for Network Storytelling
We first aimed to identify interaction commands (e.g., show a node or hide a link) applicable to node-link diagrams in synchronous network-based data storytelling. We focus our research on the communication phase within the four stages of data storytelling proposed by Li et al. [37]: analysis, planning, implementation, and communication. While we recognize that the network interaction commands required at each stage are not distinctly different, their purposes and emphasis differ. In data analysis, commands such as annotating nodes support exploration by helping users track their exploration and generate insights. In contrast, during presentations, the same commands can serve to emphasize key nodes and reinforce the narrative. Similarly, repositioning nodes is essential for untangling complex networks to understand their topologies, but in presentations, it is primarily used to guide audience attention and clarify relationships. To ensure our interaction commands align with real-time storytelling needs, we compiled a set of fundamental interaction commands deemed most critical to enhancing the communication phase; this list is not exhaustive. To compile it, we first reviewed existing literature on network interaction tasks [19, 35, 63]. We then refined this list by examining existing network data storytelling authoring tools [2, 31, 47], examples from various domains [6], and network story examples shared by news anchors (see Table 1). The final set of interaction commands was specifically tailored for real-time interactions in network data storytelling.
4.2 Selection of Physical Objects
Various physical objects can serve as interaction mediums for visualizations [14, 25, 36, 67]. Establishing a common reference is essential for grounding our discussion and exploring specific physical actions. We chose round double-sided 4×4 cm magnets in our study (Fig. 3-a). These magnets meet the requirements for ease of handling and cost-efficiency identified by the news
Figure 3: Workshop: Participants used double-sided magnets (A) to explore user actions for visualization commands and recorded their findings in worksheets (B). They referred to the illustration of visualization commands as needed (C). The ideas were then shared and discussed among the participants.
anchors. Moreover, they offer versatile affordances, such as stacking and flipping. Magnets are widely used to represent networks in contexts like classrooms and sports strategy. While their physical properties afford unique interactions, many of these, such as stacking, flipping, and rotating, can be generalized to other objects with similar characteristics. This suggests our findings could extend to a broader range of physical objects.
4.3 Ideation Workshop
4.3.1 Participants. Our workshop involved 14 participants, comprising students and professors specializing in Visualization (VIS) and Human-Computer Interaction (HCI). HCI/VIS experts were selected for their interaction design expertise, following methodologies from similar studies [25, 67]. Participants, aged 23 to 38, included ten males and four females, as disclosed. We conducted five sessions with two to three participants each.
4.3.2 Materials. We provided participants with the set of interaction commands (e.g., Show/Hide Node) and illustrations for each command to aid comprehension (Fig. 3-b,c). These materials were accessible to participants throughout the workshop. Additionally, participants received twelve 4×4 cm double-sided magnets (Fig. 3-a) and were encouraged to experiment with them to explore potential user actions for the given interaction commands. In our preliminary workshop, we observed that participants occasionally confused interaction commands (e.g., Show Node) with user actions (e.g., Attach a magnet). To address this, we introduced a few example mappings (see supplemental materials). This demonstration also served as a priming technique [41], a method commonly used in previous studies [25, 67].
4.3.3 Procedure. Each 70-minute session was divided into three phases: briefing, individual brainstorming, and group discussion. In the briefing, participants first completed a consent form and demographic questionnaire. We then provided a 10-minute overview of the research background and workshop objectives, emphasizing interaction in synchronous data storytelling. We explained each interaction command and demonstrated sample mappings, making it clear that these examples were only starting points and encouraging participants to think creatively beyond them. During individual brainstorming (25–30 min), participants were tasked with developing ways to perform each interaction command using the provided magnets and recording their ideas in the worksheets. Participants were encouraged to disregard technical limitations. In the group discussion (20–30 min), participants shared their ideas by demonstrating them with the magnets and discussed possible extensions. We encouraged them to explain the reasoning behind each interaction. The entire idea-sharing process was recorded and documented in the instructor's notes for further analysis.
4.3.4 Data Analysis. One author organized the workshop outcomes, identifying a total of 130 unique commands and their associated actions. Actions were initially coded using a reference set derived from prior studies [25, 44, 67]. This reference set included the actions Attach [25], Tap [25, 44], Draw [25, 44], Pinch [25, 44], Stack [25], Collide [25], Point [67], Bring Closer [25], Cover [25, 67], Flip [67], Rotate [25, 67], and Relocate [25, 44, 67]. For actions beyond this initial set, an open-coding approach was applied, with codes adapted to better reflect actions involving magnets (e.g., Relocate was redefined as Slide). Coding was refined iteratively through discussions among three additional authors until consensus was achieved.
4.4 Design Space
Based on our analysis of the workshop results, we summarized a
design space that maps user actions to interaction commands for
network visualizations (Fig. 4). This design space is structured
around three dimensions: 1) Interaction Command, 2) Primary
Modality of User Actions, and 3) Multiplexity of Physical Objects.
4.4.1 Dimension 1: Interaction Command. This dimension defines interaction commands for manipulating network visualizations during live presentations, categorized by the network components they target (see Fig. 4 row headers). These commands enable the manipulation of visualizations in real time, supporting interactive and improvised storytelling.
Presenters can show or hide nodes as the narrative unfolds, such as when introducing new characters or removing them (i.e., Show/Hide Node). Nodes can also be repositioned to reduce clutter or better represent relationships (i.e., Reposition Node). Adjusting node size can signify changes in attributes (i.e., Scale Node), while altering a node's visuals (e.g., shape or color) can indicate changes in its state (i.e., Change Node State). Links require similar dynamic manipulation, such as toggling their visibility (i.e., Show/Hide Link), changing their state (i.e., Change Link Type), or adjusting their width (i.e., Scale Link) to match the narrative. The direction of links is critical in illustrating relationships between nodes and can also be modified during presentations (i.e., Change Link Direction). The dynamic grouping of nodes is another essential aspect of network data storytelling (i.e., Show/Extend Group). For example, nodes associated with a particular theme, such as political parties or product categories, are often grouped with visual boundaries [31]. Nodes can be added to or removed from these groups as the narrative progresses (i.e., Hide/Shrink Group). Annotations provide contextual information, helping the audience understand the story (i.e., Show/Hide Annotation). Similarly, child networks, representing lower-layer networks within nodes, can be revealed (i.e., Show/Hide Child Network). In an international relations network, for example, a presenter might zoom into a country to explore its local government relationships, as suggested by news anchors during our interviews.
4.4.2 Dimension 2: Primary Modality of User Actions. This dimension encompasses various actions that users can perform
Figure 4: The design space maps user actions with double-sided magnets to interaction commands for network visualizations.
The horizontal axis represents user actions, the vertical axis shows interaction commands, and each cell displays mappings.
Icons in the cells indicate magnet multiplexity, as illustrated in the top-right legend. Mappings implemented in our
prototype are marked with an asterisk.
(see Fig. 4 column headers) with the magnets. The actions are further classified into four categories: Movement-based Interactions involve direct physical manipulation of the magnet, where users alter the magnet's position, orientation, or motion. Actions in this category include Attach, Detach, Slide, Rotate, Wiggle, and Flip (Table 2-A). Proximity-based Interactions depend on the spatial relationships between magnets or between magnets and hands, focusing on their relative distances. Examples include Bring Closer, Pull Apart, Cover, Point, and Enclose (Table 2-B). Combination-based Interactions involve the simultaneous or sequential use of multiple magnets, such as Stack, Unstack, Collide, and Swap (Table 2-C). Gesture-based Interactions refer to gestures performed by hands in relation to the physical objects, taking into account their spatial relationships. Examples include Tap, Draw, Trace, Pinch, and Hold (Table 2-D). By categorizing user actions in this way, we focus on single, discrete actions that users can perform, which serves as the foundation for mapping to interaction commands.
4.4.3 Dimension 3: Multiplexity of Physical Objects. This dimension classifies actions as involving a Single Object or Multiple Objects, with the latter further divided into Simultaneous and Sequential Actions (see Fig. 4 icons). It clarifies the magnet multiplexity and the temporal coordination required when using multiple magnets.
Single Object: These actions involve a single magnet, typically manipulated with one hand, such as moving, flipping, or rotating a magnet. They are straightforward and do not require coordination with other magnets.
Multiple Objects: These actions involve multiple magnets. While these actions introduce complexity, they also enhance expressiveness. Their outcome often depends on the temporal coordination of multiple magnets and typically targets links or groups.
Simultaneous Actions: These actions involve multiple magnets at once or have outcomes unaffected by order. For example, flipping two magnets simultaneously changes a link type, and bringing two magnets closer forms a group. While they do not require temporal coordination, they often demand the use of both hands, adding a layer of physical coordination.
Sequential Actions: These actions require performing operations on multiple magnets in a specific sequence. For instance, tapping magnets sequentially can define link directionality. The outcome depends on the action order, requiring users to follow a predefined sequence to achieve the desired effect. Sequential actions offer greater expressiveness than simultaneous ones, as they allow for specifying directions.
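To make the order-dependence of sequential actions concrete, the tap-A-then-tap-B pattern can be modeled as a small recognizer that remembers the most recent tap. This is an illustrative sketch, not the paper's implementation; the class name, the 2-second pairing window, and the emitted command object are our own assumptions.

```javascript
// Sketch of a sequential-action recognizer: tapping magnet A and then
// magnet B within a time window yields a directed link A -> B.
class SequentialTapRecognizer {
  constructor(windowMs = 2000) {
    this.windowMs = windowMs; // max delay between the two taps
    this.lastTap = null;      // { id, time } of the previous tap
  }

  // Called whenever the vision pipeline reports a tap on a magnet.
  // Returns a command object when a valid sequence completes, else null.
  tap(magnetId, timeMs) {
    const prev = this.lastTap;
    this.lastTap = { id: magnetId, time: timeMs };
    if (prev && prev.id !== magnetId && timeMs - prev.time <= this.windowMs) {
      this.lastTap = null; // consume the completed pair
      return { command: 'ShowLink', source: prev.id, target: magnetId };
    }
    return null;
  }
}
```

Because the first tap is recorded as the source, tapping the magnets in the opposite order produces the opposite link direction, which is exactly the extra expressiveness that simultaneous actions lack.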
5 Prototyping and Refining Interaction Design
We iteratively developed the prototype based on the design requirements from the formative study and the design space from the workshops. From the broad design space, we mapped visualization commands to user actions, ensuring they met the design requirements and incorporated critical considerations for effective communication. Throughout this iterative process, we consistently received feedback from the two news anchors (A1 and A5) who participated in the initial study, helping to refine our design.
Interaction from the Presenter's Perspective: Presenters need simple, low-cognitive-load interactions to maintain a smooth presentation flow (R2). Complex, multi-step interactions increase operational difficulty and disrupt the delivery. Feedback from news anchors highlighted that interactions requiring presenters to navigate multiple steps, such as activating a toolbox, were undesirable for maintaining a smooth presentation. Similarly, assigning the same user action to different interaction commands through mode selection complicated the process. Consequently, we avoided mode choices and ensured that each interaction command was clearly assigned to distinct user actions. To effectively guide audience attention (R1), we aligned interaction commands with the presenter's deictic gestures. For example, we implemented a gesture where pointing at a magnet triggers the display of an annotation, directing the audience's focus to the relevant area. This design is based on practices used by news anchors, who point at visuals while cueing studio staff to display corresponding content.
Interaction from the Audience's Perspective: The news anchors raised concerns about several mappings that appeared unnatural from the audience's perspective (R3). For instance, we initially implemented the Show/Hide Link feature by bringing two nodes closer together, which seemed intuitive for presenters as a toggle action (R2). However, the anchors found this misleading to the audience: bringing nodes closer suggests a stronger relationship, not a disappearing or weakening link. This mismatch may cause confusion, as the presenter's gestures did not produce the visual change the audience expected. Another example of a problematic mapping was using a pinching gesture on a magnet to reveal a child network. Pinching was too subtle for the audience, making transitions feel abrupt. This feedback highlighted the need for interactions that align with both the presenter's intent and the audience's expectations.
Dynamic Mapping between Magnets and Nodes: A network data story typically involves multiple nodes, making it challenging to link magnets to specific nodes. A simple solution is to assign magnets to nodes based on a predefined sequence, but this limits flexibility for improvised storytelling (R1). Alternatively, a one-to-one mapping between magnets and nodes can be established before the presentation. This method allows for a more flexible order in which nodes are shown, but it requires users to memorize the assignments. Adding distinguishable features, such as stickers, to the magnets reduces reusability and increases preparation time, contradicting design requirement R4. To address these issues, we designated a registration area where presenters place magnets to assign them to nodes dynamically. This approach reduces the presenters' cognitive load of memorizing assignments while allowing flexible storytelling.
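The registration-area idea amounts to binding a marker ID to a node the moment the magnet lands in a highlighted spot. The following sketch shows one way such a binding could work; the function names, the rectangular hit test, and the data shapes are our own assumptions, not the TangibleNet codebase.

```javascript
// Sketch of dynamic magnet-to-node registration: when a marker lands
// inside a highlighted registration spot, it is bound to that spot's node.
function makeRegistry(spots) {
  // spots: [{ nodeId, x, y, w, h }] — projected registration slots
  const markerToNode = new Map();

  function register(markerId, mx, my) {
    for (const s of spots) {
      const inside = mx >= s.x && mx <= s.x + s.w &&
                     my >= s.y && my <= s.y + s.h;
      if (inside && ![...markerToNode.values()].includes(s.nodeId)) {
        markerToNode.set(markerId, s.nodeId); // bind magnet to node
        return s.nodeId;
      }
    }
    return null; // not over a free registration spot
  }

  return { register, nodeOf: (markerId) => markerToNode.get(markerId) ?? null };
}
```

Because the binding happens at presentation time, any magnet can become any primary node, which is what preserves flexibility without pre-labeled stickers.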
Data Registration before Presentation: We found that assigning a magnet to every node overwhelmed users and disrupted the presentation flow, as manually displaying all nodes was time-consuming. To address this, we pre-registered the network data, following practices from previous studies [24, 34]. Discussions with news anchors revealed that network data stories often include nodes of varying importance, with some serving as background information for key nodes. We categorized the nodes into primary and secondary, where magnets are assigned only to primary nodes, while secondary nodes appear automatically alongside primary nodes. Although this limited the direct manipulation of secondary nodes, it reduced the number of magnets needed, streamlining the presentation without compromising the story's flow (R4). Similarly, links can also be considered secondary and follow the appearance of the primary nodes to enhance narrative continuity and minimize tedious interactions.
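The primary/secondary split described above can be sketched as a single visibility update: showing a primary node also reveals its attached secondary nodes, and links appear once both endpoints are visible. The data shape and names below are illustrative assumptions.

```javascript
// Sketch of the primary/secondary reveal: showing a primary node also
// shows its secondary nodes; links follow automatically once both of
// their endpoints are visible.
function showPrimary(graph, visible, primaryId) {
  // graph: { secondariesOf: {primaryId: [ids]}, links: [{source, target}] }
  const shown = new Set(visible);
  shown.add(primaryId);
  for (const s of graph.secondariesOf[primaryId] ?? []) shown.add(s);
  // secondary links need no dedicated magnet interaction
  const links = graph.links.filter(
    (l) => shown.has(l.source) && shown.has(l.target)
  );
  return { nodes: [...shown], links };
}
```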
Table 2: User actions by primary modality: We categorized user actions into four key dimensions: Movement-based
Interactions, Proximity-based Interactions, Combination-based Interactions, and Gestural Interactions.
(A) Movement-based Interactions:
Attach: Placing a magnet onto the interaction space.
Detach: The opposite of attaching; removing the magnet from the interaction space.
Slide: Moving the magnet from one location to another without detaching it.
Rotate: Turning the magnet around its axis, changing its orientation without altering its location.
Wiggle: Slightly shifting the magnet vertically or horizontally while it stays attached to the interaction surface.
Flip: Inverting the magnet by turning it over to reveal the opposite surface.
(B) Proximity-based Interactions:
Bring Closer: Moving multiple magnets toward each other without making contact.
Pull Apart: Moving multiple magnets away from each other.
Cover: Positioning a magnet or hand over the magnet without making direct contact to obscure it from view.
Point: Pointing near the magnet with a finger without making direct contact.
Enclose: Using hands to form a boundary around the proximity of magnets.
(C) Combination-based Interactions:
Stack: Placing one magnet on top of another.
Unstack: The opposite of stacking; removing a magnet from the stack.
Collide: Deliberately causing two magnets to impact each other.
Swap: Exchanging the positions of two magnets.
(D) Gestural Interactions:
Tap: Quickly touching and releasing the surface of the magnet with a finger.
Draw: Using a finger to trace a specific path or trajectory on the surface of the magnet.
Trace: Using a finger to follow the contour or outline of an object with one or more magnets.
Pinch: Using the thumb and index finger to establish two contact points on the magnet's surface, then bringing those points closer together or pulling them apart.
Hold: Maintaining continuous contact with the magnet's surface using a finger for a sustained period.
Accessible Physical Interactions: To enhance accessibility, we chose a Computer Vision (CV)–based approach over electronic circuits. Using ArUco markers (https://github.com/fdcl-gwu/aruco-markers), the system captures each marker's identity, position, and orientation while recognizing hand gestures (e.g., touch and hold). This approach allows easy deployment in diverse environments (R4), such as classrooms and offices, without requiring electronics expertise [23] or expensive wall-sized touch displays (e.g., smart boards). However, CV-based techniques are constrained by occlusions and prone to glitches with subtle movements. This choice excludes certain physical actions, such as wiggling and colliding, from the prototype.
6 TangibleNet
Building on the prototyping process, we present our prototype, TangibleNet. TangibleNet consists of a webcam, double-sided magnets with ArUco markers, a projector, and a whiteboard (Fig. 5). Users interact with visualizations by manipulating the magnets on the whiteboard and performing user actions. The webcam captures these actions, which are processed to manipulate network visualizations. The visualizations are projected onto the whiteboard, aligned with the magnets' positions.
6.1 User Interface
The user interface consists of two areas: registration and storyboard (Fig. 5). The registration area displays primary nodes from the pre-loaded network data, along with highlighted spots for placing magnets (Fig. 5-b). Placing a magnet dynamically registers it to the corresponding node. The storyboard area shows the network visualization, allowing presenters to manipulate it using magnets and hand gestures (Fig. 5-a).
6.2 Implementation Details
TangibleNet is a browser-based application developed in JavaScript, with a backend powered by Node.js (https://nodejs.org/). The system captures video streams from a webcam to detect the position and rotation of
Figure 5: System Overview: TangibleNet recognizes the presenter’s interactions with magnets and hand gestures on the
whiteboard via a webcam and projects the resulting visualizations onto the whiteboard using a projector. The projected
user interface is divided into two areas: (A) storyboard area and (B) registration area.
magnets using the OpenCV-based ArUco marker detection library, JS-ARUCO2 (https://github.com/damianofalcioni/js-aruco2/). Additionally, hand gestures are recognized using MediaPipe (https://ai.google.dev/edge/mediapipe/), which tracks the presenter's index finger to determine when it touches a magnet and measures the duration of contact by monitoring the continuity of this interaction. These interactions are reflected in the network visualization rendered by d3.js (https://d3js.org), which is projected onto a whiteboard. To prevent overlapping ArUco markers and nodes, which could hinder marker recognition accuracy, the nodes are projected near the magnets rather than directly on top of them.
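The touch-and-hold logic described above — an index fingertip staying near a magnet for a sustained period — can be sketched frame by frame. This is an assumed reconstruction, not the actual TangibleNet code: the pixel radius, hold duration, and function names are illustrative, and the fingertip coordinates stand in for MediaPipe's landmark output.

```javascript
// Sketch of hold detection: the index fingertip must stay within a small
// radius of a magnet's center for a sustained period; breaking contact
// resets the timer. Thresholds are illustrative, not measured values.
function makeHoldDetector({ radius = 30, holdMs = 800 } = {}) {
  let start = null; // { magnetId, time } when contact began

  // Feed one frame: fingertip (fx, fy), magnets [{ id, x, y }], timestamp.
  // Returns the held magnet's id once the hold duration is reached.
  function update(fx, fy, magnets, timeMs) {
    const hit = magnets.find((m) => Math.hypot(fx - m.x, fy - m.y) <= radius);
    if (!hit) { start = null; return null; }     // contact broken
    if (!start || start.magnetId !== hit.id) {
      start = { magnetId: hit.id, time: timeMs }; // contact begins
      return null;
    }
    return timeMs - start.time >= holdMs ? hit.id : null;
  }
  return { update };
}
```

Monitoring continuity this way distinguishes a Hold from a Tap (brief contact) and tolerates per-frame jitter as long as the fingertip stays inside the radius.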
6.3 Interaction on TangibleNet
TangibleNet implements the interaction commands from our design space, mapping each to corresponding user actions (see Fig. 6). As noted in Section 5, CV constraints limited the range of user actions, making it difficult to implement all commands unambiguously due to recognition errors in hand gestures and marker detection. To address this, we developed multiple command sets containing interaction commands that are unambiguously mapped to user actions. Users can select their command set before their presentation.
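A command set, as described above, is essentially a one-to-one lookup from a recognized user action to an interaction command. The sketch below illustrates the idea; the set names and their contents are our own loose approximation of Fig. 6, not the shipped configuration.

```javascript
// Sketch of pre-selectable command sets: each set maps a recognized user
// action to exactly one interaction command, keeping mappings unambiguous.
const COMMAND_SETS = {
  basic: {
    attach: 'ShowNode',
    detach: 'HideNode',
    slide: 'RepositionNode',
    rotate: 'ScaleNode',
  },
  links: {
    attach: 'ShowNode',
    sequentialTap: 'ShowLink',
    hold: 'ShowAnnotation',
    flip: 'ChangeNodeState',
  },
};

function dispatch(setName, action) {
  const set = COMMAND_SETS[setName];
  return set?.[action] ?? null; // null: action unmapped in this set
}
```

Selecting a set before the presentation fixes the lookup table, so no in-presentation mode switching is ever required.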
6.4 Use Scenarios
We illustrate the applicability of our design space through various storytelling scenarios, showcasing TangibleNet in action. Supplementary videos provide visual demonstrations.
6.4.1 Case 1: Shifting Alliances during World War I. Inspired by prior studies [2, 30, 31], this case examines evolving alliances in World War I. Countries are represented as nodes, and their alliances and hostilities as links.
Use Scenario: Alex, a middle school history teacher, uses TangibleNet to teach her students about the evolving alliances in World War I. First, she picks up several magnets and places them in the registration area to register them as nodes representing countries. She then slides the magnets representing Germany and Austria to the storyboard area, triggering the Show Node command (Fig. 6-1). As she explains the Dual Alliance, she taps the Germany and Austria magnets sequentially, triggering the Show Link command (Fig. 6-5) and guiding student focus. She then explains how Italy, due to territorial disputes, aligned with Germany. She moves Italy's magnet closer to Germany's, triggering the Show/Extend Group command to group the countries (Fig. 6-9). As tensions escalate, she describes Germany's military expansion. She rotates its magnet clockwise to trigger the Scale Node command, scaling up its node to emphasize its rising power (Fig. 6-2). When a student asks about Germany's military strength, she holds her finger on Germany's magnet to trigger the Show Annotation command to provide more details (Fig. 6-11). This interactivity allows her to flexibly adapt the story to align with her students' interests. Continuing, she points at Serbia's magnet to display an annotation about the assassination of Archduke Franz Ferdinand, explaining its role in triggering the war. As links multiply and cause visual clutter, she slides magnets to trigger the Reposition Node command, adjusting the layout for clarity.
6.4.2 Case 2: Explaining Political Alliances During an Election. Inspired by discussions with news anchors and their real-world examples, this case examines shifting political relationships among politicians, parties, and corporations during an election campaign. Nodes represent these entities, while links depict endorsements, alliances, and funding connections.
Use Scenario: Emma, a news anchor, prepares for a live broadcast on the dynamics of an upcoming election. She attaches the magnets representing Party A, Candidate X, and major corporations to the storyboard area, triggering the Show Node command. To illustrate affiliations, she taps Candidate X and Party A sequentially, triggering the Show Link command to display the link indicating that Candidate X belongs to Party A. She then moves Party A and a corporation closer, triggering the
Figure 6: Supported interactions include: (1) Attaching or detaching magnets to show or hide nodes; (2) Rotating a magnet to scale a node; (3) Flipping a magnet to change a node's state; (4) Stacking a magnet to highlight a node; (5) Tapping magnets sequentially to show or hide links; (6) Holding magnets simultaneously to change link types; (7) Tapping the source magnet first and then the target magnet to change link direction; (8) Holding one magnet and rotating the other to scale the link; (9) Bringing magnets closer to show or extend a group; (10) Covering a magnet to hide or shrink a group; (11) Pointing at a magnet to display an annotation; (12) Rotating a magnet 360° to show or hide a child network.
Show/Extend Group command to indicate an endorsement or funding relationship. As the campaign progresses, she highlights substantial funding from Party A to Candidate X. She holds Candidate X's magnet and rotates the corporation's magnet outward, activating the Scale Link command to visually emphasize the strength of financial connections by thickening the link between Candidate X and the corporation (Fig. 6-8). To explore Candidate X's support network, she rotates Candidate X's magnet 360 degrees clockwise, triggering the Show Child Network command (Fig. 6-12), revealing a child network of relationships with lobbyists, interest groups, and grassroots organizations supporting the candidate. At a pivotal point in the campaign, she explains that a scandal involving Party A has caused several corporations to withdraw their political alliances with the party. She covers their magnets to trigger the Hide/Shrink Group command, removing them from the group associated with Party A (Fig. 6-10). Additionally, she emphasizes a change in Candidate X's political stance due to the scandal. She flips Candidate X's magnet to trigger the Change Node State command (Fig. 6-3), which changes the node's visual representation to reflect a policy shift. Finally, she holds both the magnets of Candidate X and Party A simultaneously, triggering the Change Link Type command (Fig. 6-6), modifying the link's color to visually signify their shift from allies to opponents.
6.4.3 Case 3: Optimizing a Supply Chain Network. This case visualizes a supply chain network to identify bottlenecks and improve logistics. Nodes represent factories, distribution centers, and retailers, while links depict the flow of goods.
Use Scenario: Linda, a logistics manager, presents strategies to improve supply chain efficiency. She begins by attaching magnets representing factories, distribution centers, and retail stores to the whiteboard, triggering the Show Node command. Links representing existing transportation routes appear automatically. Narrating how a specific distribution center with reduced capacity acts as a bottleneck, Linda stacks a widget magnet on its node, activating the Highlight Node command (Fig. 6-4). She then holds the bottlenecked center's magnet and rotates the affected retail store's magnet inward, triggering the Scale Link command. The link between them becomes thinner, visually indicating the reduced flow of goods on that route. To propose an alternative, Linda taps the magnets of other distribution centers sequentially, activating the Change Link Direction command (Fig. 6-7). This adjustment redirects the flow of goods from the overloaded center to others, providing a visual representation of the proposed optimization.
7 Evaluation
We conducted a user study to evaluate the presentation experience of network-based data storytelling with TangibleNet, gathering feedback on its utility, usability, and learnability. Additionally, we compared it to participants' familiar presentation environments and explored potential applications. While authoring support is important, this study does not address it.
7.1 Participants
Our study included 12 participants (P1-P12) recruited from the
local community, comprising eight males, three females, and
Figure 7: Evaluation setting: (A) Participants presenting the story using TangibleNet. (B) Selected snapshots of the target
network data story, progressing sequentially from 1 to 6. (C) Ratings of TangibleNet on a 1–5 scale for the naturalness of
the interactions, ease of learning, and overall engagement, with 1 being the least favorable and 5 the most favorable.
one non-binary individual. Participants held a range of occu-
pations, including university students, public ocers, software
engineers, and television directors, with ages ranging from 20 to
34. Most participants had experience delivering live presentations
that involved data, and all were familiar with node-link diagram
representations. Six participants had experience using these dia-
grams in their presentations. Participants’ familiarity with AR
systems varied widely, from no prior knowledge to developing
AR applications.
7.2 Apparatus
We used a Logitech HD 1080p webcam with a 60fps frame rate. The projector was an OWNKNEW model with 27,000 lumens and a resolution of 1920×1080, projecting onto a whiteboard measuring approximately 70cm by 120cm. The system ran on a MacBook Pro (2021) with an Apple M1 Pro chip and 16GB of RAM. The magnets used in the setup had a diameter of 4cm and a thickness of 1.5cm.
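As a concrete illustration of this setup, the sketch below shows one way webcam pixel coordinates could be mapped onto the 120cm × 70cm board surface. It is a simplified sketch under stated assumptions (an axis-aligned board, no lens distortion or perspective correction), not TangibleNet's actual calibration code; a real implementation would use a full homography.

```python
# Hypothetical pixel-to-board calibration sketch. Assumes the camera
# sees the 120cm x 70cm whiteboard as an axis-aligned rectangle.

BOARD_W_CM, BOARD_H_CM = 120.0, 70.0

def make_pixel_to_board(top_left, bottom_right):
    """Return a function mapping a camera pixel (px, py) to board cm."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    def to_board(px, py):
        u = (px - x0) / (x1 - x0)  # normalized horizontal position
        v = (py - y0) / (y1 - y0)  # normalized vertical position
        return (u * BOARD_W_CM, v * BOARD_H_CM)
    return to_board

# Illustrative corner pixels of the board in a 1920x1080 frame.
to_board = make_pixel_to_board((200, 100), (1720, 980))
print(to_board(960, 540))  # -> (60.0, 35.0), the board center
```

In practice the four board corners would come from a one-time calibration step rather than hard-coded values.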
7.3 Tasks and Procedure
Participants used TangibleNet to present a network data story (Fig. 7-b). We chose the shifting alliances before and during World War I (Fig. 7-a) as the target network story (see supplemental materials for the complete storyline). This scenario was considered appropriate because it encompasses a wide range of interaction commands and has been frequently studied in network data storytelling research [2, 30, 31]. The story incorporates the following commands: Show/Hide Node, Reposition Node, Scale Node, Change Node State, Show/Hide Link, Show/Extend Group, Hide/Shrink Group, Show Annotation, and Show/Hide Child Network.
Each session began with participants completing a consent form and a demographic questionnaire. They were informed they could withdraw at any time. Instructors then described the experimental setup, explaining that participants would present in front of a whiteboard with projected visualizations. The instructors introduced the basic concepts of network data storytelling, including node-link diagrams and interaction commands. Participants were also given a handout outlining the target network data story to help them familiarize themselves with the narrative (see supplemental materials). Following this, the instructors demonstrated TangibleNet, showcasing the interaction techniques available in the system. Participants were then asked to present the story using TangibleNet. During the presentation, the instructor posed questions (e.g., “Could you show that change again?”) and requested unscripted manipulations to demonstrate TangibleNet's flexibility. We told participants they could ask questions about the interactions throughout their presentations if needed.
Following the presentation, participants rated TangibleNet on a 1–5 scale for the naturalness of the interactions, ease of learning, and overall engagement. These ratings informed a subsequent semi-structured interview in which participants elaborated on their experiences with the system. They were also asked to compare TangibleNet with their own familiar presentation environments, providing insights into areas for improvement and potential application scenarios. Each user study session lasted approximately 50 to 60 minutes, and participants received 6 USD as compensation.
7.4 Observation and Feedback
All participants completed their presentations using TangibleNet, interacting with the network visualization throughout. They provided subjective feedback, using their familiar workflows as an informal baseline where applicable. Below, we discuss the insights gathered from the study.
Easy-to-Learn and Straightforward Interactions: Most participants were able to interact with the network visualization immediately after a single demonstration by the instructor. This ease of use is reflected in the responses, where 9 out of 12 participants agreed that the system was easy to learn, and none disagreed (Fig. 7-c). Only P9 and P10, who had no prior experience with AR, required additional clarification. This underscores TangibleNet's low learning curve. P1 remarked, “I can't think of a better way to do this–the interactions with TangibleNet aligned with the visual effects. The magnets are physical, and the visualization is digital, but it felt seamless, unlike using a mouse or keyboard for presentations.” Moreover, P5 observed that “Changing the node size by rotating the magnet is just like adjusting the volume on my stereo.” Similarly, P4 commented, “Rotating the magnet to reveal the child network feels like zooming in with a DSLR camera lens to see more detail. I enjoy the sensation of diving into the child network.” This suggests that TangibleNet effectively leverages the affordances of magnets and users' familiarity with physical objects. However, P11 remarked, “It feels inconsistent that physical objects represent nodes while the links remain non-physical.” This inconsistency in aligning physical and digital elements affects the perceived naturalness of the interaction for some users.
Some participants highlighted the drawbacks of body-based interactions. P1 and P11 mentioned experiencing arm fatigue during extended use and concerns about blocking the projected image, potentially affecting the audience's experience. Additionally, P1, P11, and P12 preferred directly manipulating projected visuals by hand for certain interaction commands. As P12 remarked, “When I want to group three nodes at once, I'd rather circle them with my finger on a touch display.” Similarly, P1 stated, “It's easier to sketch a link directly on the whiteboard.” P11 further added, “Not being able to interact with the visuals by touch makes some actions feel disconnected and less straightforward.” These observations suggest that integrating TangibleNet with touch displays could enhance both the intuitiveness and flexibility of its interaction design.
Mixed Reactions in Aligning Deictic Gestures and Interaction Commands: The design of aligning interaction commands with deictic gestures received mixed feedback. P9 appreciated this approach, stating, “Using gestures to display nodes and links helps to highlight points of interest.” They also remarked, “Compared to using a clicker, where I have to manage both the clicker and pointing at the same time, this system makes it easier since I can do both in one motion.” In contrast, P3 found it frustrating, explaining, “I was pointing at a node to direct the audience's attention, and suddenly the annotation popped up. I didn't want that to happen.” Reflecting on the experience, they added, “It's confusing when the same gesture is used for guiding the audience and controlling the visuals. It makes the software harder to use.” This comment aligns with discussions on the conflict between operational and affective gestures, as noted by Hall et al. [24]. The issue is further exacerbated by the limitations of the current webcam-based CV system, which lacks depth detection. This can result in pointing gestures being misinterpreted as interactions with the magnets, leading to unintended actions.
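One possible way to mitigate this ambiguity without depth sensing is a dwell-time heuristic: a fingertip near a magnet only counts as an interaction if it lingers for a minimum number of frames. The sketch below illustrates the idea; the thresholds and function names are illustrative assumptions, not part of the prototype.

```python
# Hypothetical gesture-vs-interaction disambiguation sketch.
# A fingertip must stay within NEAR_PX of a magnet for DWELL_FRAMES
# consecutive frames before it is treated as an interaction;
# otherwise it is classified as a deictic (pointing) gesture.

NEAR_PX = 40       # fingertip-to-magnet distance threshold (pixels)
DWELL_FRAMES = 30  # ~0.5 s at the webcam's 60 fps

def classify(distances):
    """distances: per-frame fingertip-to-magnet distances in pixels."""
    run = 0
    for d in distances:
        run = run + 1 if d <= NEAR_PX else 0  # reset when finger leaves
        if run >= DWELL_FRAMES:
            return "interaction"
    return "pointing"
```

A brief point near a node would then trigger no command, at the cost of a small delay before intentional interactions take effect.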
Benets of Physical Interaction: Most participants high-
lighted that using physical objects made interactions smoother
and easier for presenters. P12 remarked, “This tool lets me multi-
task more eciently than a touchscreen because I can use multiple
magnets at the same time. P2 added, “The magnets are thick
enough to grab easily, even from the sides of the whiteboard. It’s
easier to handle physical objects than digital interfaces. These
comments highlight how tangibility can make interactions more
easy to use and ecient in data storytelling [
27
,
60
]. In addition,
several participants appreciated the magnetic force. P2 reected,
“The magnets stayed in place during rotations and slides, which
made everything feel smooth and satisfying. Others mentioned
that the magnetic force added a tactile element that made the
interactions more enjoyable. The familiarity of magnets further
contributed to their ease of use. P3 shared a nostalgic observa-
tion: “Using magnets in the presentation reminded me of arranging
character magnets on a wall as a kid to represent relationships.
These comments underline the value of physicality and its ability
to make interactions easy to use and engaging.
Enhancing the Presenter's Sense of Autonomy: Participants frequently highlighted that TangibleNet provided a sense of autonomy during presentations, making the experience engaging. This was reflected in the responses, with 12 out of 14 participants describing their experience with TangibleNet as engaging (Fig. 7-c). P8 explained, “(In contrast to slideshow) I liked being able to control when nodes and links appeared and deciding which elements to show. It gave me a sense of control over the presentation.” P2 also commented, “The magnets' weight and the way they stick to the board made it feel very hands-on. It reminded me of driving a manual car. I feel more connected to what I'm doing.” These remarks highlight how TangibleNet's physical interactions allowed presenters to actively construct network visualizations rather than simply advancing slides, contributing to a sense of autonomy and engagement.
Comparison with Existing Workflows: The instructors asked participants about the tools they typically use for data storytelling and how TangibleNet compares in terms of capabilities. Participants mainly use PowerPoint or Keynote slide decks, printed diagrams, and sketches for their presentations. They identified two key advantages of TangibleNet: flexibility and dynamic layout. Many participants appreciated how TangibleNet enables more improvisational presentations. P12 noted, “Unlike slideshows with fixed sequences, this system lets me freely interact with the visualization and adapt the narrative during the presentation.” This flexibility was particularly beneficial in dynamic environments, such as business settings where priorities and time constraints can shift rapidly. Additionally, the dynamic layout afforded by TangibleNet allowed participants to easily reposition nodes, optimizing space usage and bringing key components into focus as the story evolved. P12 further highlighted, “I don't have to pre-plan how the network visualization will change in the story.” Observations confirmed that participants frequently adjusted the layout to reduce visual clutter as the network expanded.
However, participants noted some limitations. Several preferred alternative implementations of certain commands and requested advanced features to better support their storytelling needs. P5 mentioned, “Flipping magnets one by one to change node states doesn't work for scenarios like soccer, where players often have multiple roles.” The current implementation, which only allows sequential state changes, felt restrictive in such contexts. P7 commented, “I wish I could change the state of all nodes in a category at the same time. It'd be helpful for things like showing character changes in a TV drama map after a major event.” This feature was seen as particularly useful in scenarios where multiple node states shift simultaneously.
Feedback on Setup: Many participants appreciated TangibleNet's accessible setup, which uses standard equipment, making it easy to install in homes or offices. However, they noted challenges with the CV-based approach. P7, P8, and P10 struggled with interactions a few times when their bodies unintentionally blocked the markers from the webcam. Participants were sometimes unaware of the camera's role in tracking. P7 remarked, “I didn't realize my body was blocking the markers from the webcam until the interaction didn't work.” Similarly, P8 noted, “When I focused on using the magnets and telling the story, I forgot the camera was even there.” These comments reflect a lack of awareness about the camera's role in tracking interactions, particularly among P7, P8, and P10, who had no prior experience with AR. These challenges suggest that users unfamiliar with AR and CV may require additional guidance to fully utilize TangibleNet.
8 Discussion
We summarize key lessons and design implications for developing
presentation tools with physical objects for synchronous data
storytelling. We also discuss potential improvements.
8.1 Design Implications
Expressiveness of User Actions: To ensure a smooth interaction experience, user actions should offer sufficient parameters for the intended visualization commands [67]. For example, repositioning one node offers two-dimensional continuous input, while flipping one node involves binary input. The object multiplexity dimension of our design space allows the same user actions to generate different visualization commands based on timing and order, adding an extra layer of expressiveness. By incorporating physical objects, our prototype enhances the expressiveness of body language and gestures. Currently, our prototype uses double-sided magnets as the primary interactive tool for network visualizations. However, other physical objects designed specifically for storytelling (e.g., Hans Rosling's meter-long teaching stick [51]) could progress stories in creative ways.
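To illustrate how object multiplexity lets one user action yield different commands, the sketch below maps (action, object count) pairs to commands named in the paper. The table contents are a hypothetical subset for illustration, not the prototype's actual mapping.

```python
# Illustrative dispatcher: the same physical action maps to different
# visualization commands depending on how many magnets are involved.
# Command names come from the paper; the pairings are assumptions.

def dispatch(action, n_objects):
    """Map a (user action, object count) pair to an interaction command."""
    table = {
        ("attach", 1): "Show Node",
        ("detach", 1): "Hide Node",
        ("rotate", 1): "Scale Node",
        ("rotate", 2): "Scale Link",            # hold one magnet, rotate another
        ("flip",   1): "Change Node State",
        ("tap",    2): "Change Link Direction",  # tap two magnets sequentially
    }
    return table.get((action, n_objects), "No-op")

assert dispatch("rotate", 1) == "Scale Node"
assert dispatch("rotate", 2) == "Scale Link"
```

The two `rotate` entries show the extra expressiveness the design space describes: object count disambiguates an otherwise identical gesture.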
Usability from the Presenter's Perspective: Usability in synchronous storytelling relies on how easy the interactions are for the presenter, allowing them to concentrate on delivering the narrative smoothly. This is particularly important compared to pre-recorded presentations, as synchronous storytelling requires quick thinking and improvisation [7]. A key advantage of our prototype is its use of physical objects for interaction, leveraging their inherent familiarity and simplicity. Most participants quickly learned how to interact with these objects by drawing on their real-world experiences and found the mapping between actions and interaction commands straightforward. Furthermore, interacting with physical objects enhances user autonomy and makes the storytelling experience more engaging. This increased engagement through physical interaction aligns with existing research findings [45, 69]. Additionally, allowing presenters to incrementally build visuals based on their judgment during the presentation further reinforces their sense of autonomy. We recommend that future designs incorporate physical interactions to simplify user interactions and enhance both presenter autonomy and engagement.
Engagement from the Audience's Perspective: While some interactions feel intuitive to the presenter, they might appear awkward or confusing to the audience. Neglecting this balance can divert attention from content and reduce engagement, a key concern frequently emphasized by news anchors trained in effective communication. Considering the audience's perspective on interactions, rather than focusing solely on the presenter, is especially critical in synchronous data storytelling. This requirement is distinct from interaction design for data analysis, where the observer's viewpoint typically receives minimal attention. As we learned from the insights provided by news anchors, designing interaction commands for visualizations that leverage principles of engaging gestures in public communication is a necessary direction for future research, aiming to create interactions that are both functional for presenters and captivating for audiences.
Balancing Expressiveness, Usability, and Audience Engagement: Balancing expressiveness, usability, and audience engagement was challenging during prototyping. For example, as discussed in Section 5, the toggle action for hiding links by bringing magnets closer together is straightforward for the presenter but appears counterintuitive to the audience because the links disappear as the nodes move closer. This approach only meets the criteria for expressiveness and usability. Similarly, while the flip action for changing node states is easy to perform and does not seem counterintuitive, it lacks the expressiveness needed to handle multiple states, addressing only usability and audience engagement. Future designs should consider integrating physical object-based interactions with multimodal inputs, such as speech or gaze, and a broader range of gestures to better address all three criteria.
Automation vs. Manual Control: We recommend differentiating between elements the presenter controls manually and automated ones, depending on their significance in the narrative. Our findings suggest that manual control enhances the presenter's sense of autonomy and engagement, bringing flexibility to the storytelling. However, relying solely on manual control can be time-consuming and burdensome and lead to fatigue, particularly in lengthy or complex presentations. To address this, we implemented rules where specific nodes and links appear or disappear based on changes in other components. The balance between manual and automatic control should be guided by the importance of the various story elements. Additionally, incorporating live speech recognition for controlling visuals during the presenter's narration could help reduce their workload and facilitate a smoother storytelling experience, as demonstrated in prior synchronous storytelling research [39].
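A minimal sketch of such a rule, using hypothetical node names rather than the prototype's code: the presenter reveals nodes manually, and dependent links are shown automatically once both endpoints are visible.

```python
# Sketch of the manual-vs-automatic split described above.
# Manual event: a magnet is attached, revealing a node.
# Automatic rule: reveal any link whose endpoints are both visible.
# Node and link names are illustrative assumptions.

visible_nodes = set()
links = [("factory", "warehouse"), ("warehouse", "store")]

def on_node_shown(node):
    """Handle the manual attach event and apply the automatic link rule."""
    visible_nodes.add(node)
    return [l for l in links
            if l[0] in visible_nodes and l[1] in visible_nodes]

on_node_shown("factory")                 # no links revealed yet
auto_links = on_node_shown("warehouse")  # the factory-warehouse link appears
```

The same pattern extends to hiding: detaching a magnet could automatically hide links attached to the removed node.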
8.2 Limitations and Future Work
Expanding Network Storytelling: Our interaction commands do not fully encompass those required to address a broader range of narrative patterns [58] and visual representations [2, 31]. For example, as P7 noted, our prototype lacks a command for filtering nodes by attributes. To enhance TangibleNet's versatility, future work can explore multimodal inputs (e.g., speech, eye gaze), touch-enabled displays like smartboards, and various physical objects. Several participants expressed interest in these technologies to expand interaction capabilities.
Lack of Authoring Support: This study does not address authoring support. While TangibleNet reduces the effort required for storytelling by enabling dynamic story construction through interaction, the initial network data setup still requires manual coding. Future work will develop a web-based interface to streamline this process, supporting authoring tasks such as defining node states, adding annotations, previewing narratives, and configuring visual attributes like node appearance and animations. Its design will be guided by existing research on authoring systems for node-link diagrams for communication [47, 61].
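To make the manual-coding step concrete, here is a hypothetical example of what such a network specification might look like, loosely modeled on the WWI alliances story used in the evaluation. The field names and schema are illustrative assumptions, not TangibleNet's actual format.

```python
# Hypothetical network-story specification of the kind the authoring
# step currently requires. The schema is illustrative only.

story_network = {
    "nodes": [
        {"id": "france",  "states": ["neutral", "entente"], "marker": 1},
        {"id": "germany", "states": ["neutral", "central"], "marker": 2},
    ],
    "links": [
        {"source": "france", "target": "germany", "directed": False},
    ],
    "annotations": [
        {"node": "france", "text": "Joins the Triple Entente"},
    ],
}

# A minimal validity check an authoring tool could automate:
# every link endpoint must be a declared node.
node_ids = {n["id"] for n in story_network["nodes"]}
assert all(l["source"] in node_ids and l["target"] in node_ids
           for l in story_network["links"])
```

A web-based authoring interface would generate and validate such a specification instead of requiring presenters to write it by hand.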
TangibleNet for Remote Audiences: While TangibleNet was primarily designed for co-located storytelling with a whiteboard and projector, its core functionality—interacting with network data via physical objects—is also valuable for remote use. TangibleNet can be adapted in two ways: (1) by streaming a recorded presentation, or (2) by integrating presenter tracking, similar to features in videotelephony software like Zoom and Google Meet [22, 73]. This approach captures the presenter's body movements and physical interactions, embedding them into the video feed sent to remote users (e.g., [8, 24, 39]). While this method can enhance visual clarity by eliminating the need to record a projected image, it may introduce errors or delays due to body detection and visual integration.
Exploring Other Physical Objects: We chose double-sided magnets for their accessibility, familiarity, and ease of use. As their shape and properties resemble other objects, our design space may extend to similar physical items. However, other objects may offer unique affordances that enhance network visualization interactions, such as physical representations of links. Additionally, the semantic role of physical objects in storytelling is worth exploring. In theatre, props help convey narratives, suggesting that integrating such objects with visualizations could open new interaction possibilities for future research.
Exploring Diverse Storytelling Practices: Our formative study included five news anchors, experts in data communication who are often overlooked in visualization research. However, our evaluation did not involve news anchors or test the system in diverse settings like TV programs or classrooms. These contexts present unique challenges, such as varying audience sizes, audience interactions, and integration with other media (e.g., videos, music). Future research should evaluate the system in real-world environments with communication experts to address these challenges.
9 Conclusion
This study explored interaction design with physical objects for network visualization in synchronous data storytelling. We interviewed five news anchors to identify key communication factors and the role of physical objects in presentations. We then conducted workshops with 14 VIS/HCI researchers to examine how physical objects can interact with network visualizations. These insights informed a three-dimensional design space: 1) interaction commands, 2) primary modality, and 3) multiplexity of physical objects. We developed TangibleNet, a projector-based AR prototype that allows presenters to interact with node-link diagrams using double-sided magnets. Our evaluation with 12 participants showed that TangibleNet supports interactions that are easy to learn, enhances presenter autonomy, and effectively supports synchronous data storytelling. We hope this work inspires future research on physical objects in data-driven storytelling.
Acknowledgments
We would like to thank the reviewers. This work is partially
supported by the HK RGC GRF grant 16214623 and by the Knut
and Alice Wallenberg Foundation through Grant KAW 2019.0024.
References
[1] Fereshteh Amini, Matthew Brehmer, Gordon Bolduan, Christina Elmer, and Benjamin Wiederkehr. 2018. Evaluating data-driven stories and storytelling tools. In Data-driven storytelling. CRC Press, Boca Raton, FL, 249–286. https://doi.org/10.1201/9781315281575-12
[2] Benjamin Bach, Natalie Kerracher, Kyle Wm. Hall, Sheelagh Carpendale, Jessie Kennedy, and Nathalie Henry Riche. 2016. Telling Stories about Dynamic Networks with Graph Comics. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). Association for Computing Machinery, New York, NY, USA, 3670–3682. https://doi.org/10.1145/2858036.2858387
[3] Benjamin Bach, Emmanuel Pietriga, and Jean-Daniel Fekete. 2014. GraphDiaries: Animated Transitions and Temporal Navigation for Dynamic Networks. IEEE Transactions on Visualization and Computer Graphics 20, 5 (2014), 740–754. https://doi.org/10.1109/TVCG.2013.254
[4] S. Sandra Bae, Takanori Fujiwara, Anders Ynnerman, Ellen Yi-Luen Do, Michael L. Rivera, and Danielle Albers Szafir. 2024. A Computational Design Pipeline to Fabricate Sensing Network Physicalizations. IEEE Transactions on Visualization and Computer Graphics 30, 1 (2024), 913–923. https://doi.org/10.1109/TVCG.2023.3327198
[5] S. Sandra Bae, Clement Zheng, Mary Etta West, Ellen Yi-Luen Do, Samuel Huron, and Danielle Albers Szafir. 2022. Making Data Tangible: A Cross-disciplinary Design Space for Data Physicalization. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 81, 18 pages. https://doi.org/10.1145/3491102.3501939
[6] Liliana Bounegru, Tommaso Venturini, Jonathan Gray, and Mathieu Jacomy. 2017. Narrating Networks. Digital Journalism 5, 6 (2017), 699–730. https://doi.org/10.1080/21670811.2016.1186497
[7] Matthew Brehmer and Robert Kosara. 2022. From Jam Session to Recital: Synchronous Communication and Collaboration Around Data in Organizations. IEEE Transactions on Visualization and Computer Graphics 28, 1 (2022), 1139–1149. https://doi.org/10.1109/TVCG.2021.3114760
[8] Yining Cao, Rubaiat Habib Kazi, Li-Yi Wei, Deepali Aneja, and Haijun Xia. 2024. Elastica: Adaptive Live Augmented Presentations with Elastic Mappings Across Modalities. In Proceedings of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 599, 19 pages. https://doi.org/10.1145/3613904.3642725
[9] Qing Chen, Nan Chen, Wei Shuai, Guande Wu, Zhe Xu, Hanghang Tong, and Nan Cao. 2024. Calliope-Net: Automatic Generation of Graph Data Facts via Annotated Node-Link Diagrams. IEEE Transactions on Visualization and Computer Graphics 30, 1 (2024), 562–572. https://doi.org/10.1109/TVCG.2023.3326925
[10] Maxime Cordeil, Benjamin Bach, Andrew Cunningham, Bastian Montoya, Ross T. Smith, Bruce H. Thomas, and Tim Dwyer. 2020. Embodied Axes: Tangible, Actuated Interaction for 3D Augmented Reality Data Spaces. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376613
[11] Nicole Dargue, Naomi Sweller, and Michael P. Jones. 2019. When our hands help us understand: A meta-analysis into the effects of gesture on comprehension. Psychological Bulletin 145, 8 (2019), 765.
[12] Evanthia Dimara, Harry Zhang, Melanie Tory, and Steven Franconeri. 2022. The Unmet Data Visualization Needs of Decision Makers Within Organizations. IEEE Transactions on Visualization and Computer Graphics 28, 12 (2022), 4101–4112. https://doi.org/10.1109/TVCG.2021.3074023
[13] Randi Alexandra Engle. 2000. Toward a theory of multimodal communication: Combining speech, gestures, diagrams, and demonstrations in instructional explanations. Stanford University, Stanford, CA, USA.
[14] David Englmeier, Isabel Schönewald, Andreas Butz, and Tobias Höllerer. 2019. Sphere in Hand: Exploring Tangible Interaction with Immersive Spherical Visualizations. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, Osaka, Japan, 912–913. https://doi.org/10.1109/VR.2019.8797887
[15] Barrett Ens, Sarah Goodwin, Arnaud Prouzeau, Fraser Anderson, Florence Y. Wang, Samuel Gratzl, Zac Lucarelli, Brendan Moyle, Jim Smiley, and Tim Dwyer. 2021. Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 1193–1203. https://doi.org/10.1109/TVCG.2020.3030334
[16] Temiloluwa Paul Femi-Gege, Matthew Brehmer, and Jian Zhao. 2024. VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation. Proc. ACM Hum.-Comput. Interact. 8, ISS, Article 531 (Oct. 2024), 22 pages. https://doi.org/10.1145/3698131
[17] Velitchko Filipov, Alessio Arleo, and Silvia Miksch. 2023. Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies. Computer Graphics Forum 42, 6 (2023), e14794. https://doi.org/10.1111/cgf.14794
[18] George W. Fitzmaurice and William Buxton. 1997. An empirical evaluation of graspable user interfaces: towards specialized, space-multiplexed input. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '97). Association for Computing Machinery, New York, NY, USA, 43–50. https://doi.org/10.1145/258549.258578
[19] Mathias Frisch, Jens Heydekorn, and Raimund Dachselt. 2009. Investigating multi-touch and pen gestures for diagram editing on interactive surfaces. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS '09). Association for Computing Machinery, New York, NY, USA, 149–156. https://doi.org/10.1145/1731903.1731933
[20] Takanori Fujiwara, Tarik Crnovrsanin, and Kwan-Liu Ma. 2018. Concise provenance of interactive network analysis. Visual Informatics 2, 4 (2018), 213–224. https://doi.org/10.1016/j.visinf.2018.12.002
[21] Weilun Gong, Stephanie Santosa, Tovi Grossman, Michael Glueck, Daniel Clarke, and Frances Lai. 2023. Affordance-Based and User-Defined Gestures for Spatial Tangible Interaction. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS '23). Association for Computing Machinery, New York, NY, USA, 1500–1514. https://doi.org/10.1145/3563657.3596032
[22] Google LLC. 2024. Google Meet: Secure Video Meetings. https://meet.google.com. Accessed: 2024-11-29.
[23] Peter Gyory, S. Sandra Bae, Ruhan Yang, Ellen Yi-Luen Do, and Clement Zheng. 2023. Marking Material Interactions with Computer Vision. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 478, 17 pages. https://doi.org/10.1145/3544548.3580643
[24]
Brian D. Hall, Lyn Bartram, and Matthew Brehmer. 2022. Augmented Chi-
ronomia for Presenting Data to Remote Audiences. In Proceedings of the 35th
Annual ACM Symposium on User Interface Software and Technology (Bend, OR,
USA) (UIST ’22). Association for Computing Machinery, New York, NY, USA,
Article 18, 14 pages. https://doi.org/10.1145/3526113.3545614
[25]
Shuqi He, Haonan Yao, Luyan Jiang, Kaiwen Li, Nan Xiang, Yue Li, Hai-
Ning Liang, and Lingyun Yu. 2024. Data Cubes in Hand: A Design Space of
Tangible Cubes for Visualizing 3D Spatio-Temporal Data in Mixed Reality. In
Proceedings of the CHI Conference on Human Factors in Computing Systems
(CHI ’24). Association for Computing Machinery, New York, NY, USA, Article
209, 21 pages. https://doi.org/10.1145/3613904.3642740
[26]
Anuruddha Hettiarachchi and Daniel Wigdor. 2016. Annexing Reality: En-
abling Opportunistic Use of Everyday Objects as Tangible Proxies in Aug-
mented Reality. In Proceedings of the 2016 CHI Conference on Human Fac-
tors in Computing Systems (San Jose, California, USA) (CHI ’16). Associa-
tion for Computing Machinery, New York, NY, USA, 1957–1967. https:
//doi.org/10.1145/2858036.2858134
[27]
Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless in-
terfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI
Conference on Human Factors in Computing Systems (CHI ’97). Association
for Computing Machinery, New York, NY, USA, 234–241. https://doi.org/10.
1145/258549.258715
[28]
Bret Jackson, Tung Yuen Lau, David Schroeder, Kimani C. Toussaint, and
Daniel F. Keefe. 2013. A Lightweight Tangible 3D Interface for Interactive
Visualization of Thin Fiber Structures. IEEE Transactions on Visualization and
Computer Graphics 19, 12 (2013), 2802–2809. https://doi.org/10.1109/TVCG.
2013.121
[29]
Yvonne Jansen and Pierre Dragicevic. 2013. An Interaction Model for Visualiza-
tions Beyond The Desktop. IEEE Transactions on Visualization and Computer
Graphics 19, 12 (2013), 2396–2405. https://doi.org/10.1109/TVCG.2013.134
[30]
Joohee Kim, Hyunwook Lee, Duc M. Nguyen, Minjeong Shin, Bum Chul Kwon,
Sungahn Ko, and Niklas Elmqvist. 2025. DG Comics: Semi-Automatically Au-
thoring Graph Comics for Dynamic Graphs. IEEE Transactions on Visualization
and Computer Graphics 31, 1 (2025), 973–983. https://doi.org/10.1109/TVCG.
2024.3456340
[31]
Nam Wook Kim, Nathalie Henry Riche, Benjamin Bach, Guanpeng Xu,
Matthew Brehmer, Ken Hinckley, Michel Pahud, Haijun Xia, Michael J. McGuf-
n, and Hanspeter Pster. 2019. DataToon: Drawing Dynamic Network
Comics With Pen + Touch Interaction. In Proceedings of the 2019 CHI Con-
ference on Human Factors in Computing Systems (Glasgow, Scotland Uk)
(CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12.
https://doi.org/10.1145/3290605.3300335
[32] Robert Kosara and Jock Mackinlay. 2013. Storytelling: The Next Step for Visualization. Computer 46, 5 (2013), 44–50. https://doi.org/10.1109/MC.2013.36
[33] Mathieu Le Goc, Charles Perin, Sean Follmer, Jean-Daniel Fekete, and Pierre Dragicevic. 2019. Dynamic Composite Data Physicalization Using Wheeled Micro-Robots. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2019), 737–747. https://doi.org/10.1109/TVCG.2018.2865159
[34] Bongshin Lee, Rubaiat Habib Kazi, and Greg Smith. 2013. SketchStory: Telling More Engaging Stories with Data through Freeform Sketching. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2416–2425. https://doi.org/10.1109/TVCG.2013.191
[35] Bongshin Lee, Catherine Plaisant, Cynthia Sims Parr, Jean-Daniel Fekete, and Nathalie Henry. 2006. Task taxonomy for graph visualization. In Proceedings of the 2006 AVI Workshop on BEyond Time and Errors: Novel Evaluation Methods for Information Visualization (Venice, Italy) (BELIV ’06). Association for Computing Machinery, New York, NY, USA, 1–5. https://doi.org/10.1145/1168149.1168168
[36] Kevin Lefeuvre, Soeren Totzauer, Michael Storz, Albrecht Kurze, Andreas Bischof, and Arne Berger. 2018. Bricks, Blocks, Boxes, Cubes, and Dice: On the Role of Cubic Shapes for the Design of Tangible Interactive Devices. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). Association for Computing Machinery, New York, NY, USA, 485–496. https://doi.org/10.1145/3196709.3196768
[37] Haotian Li, Yun Wang, and Huamin Qu. 2024. Where Are We So Far? Understanding Data Storytelling Tools from the Perspective of Human-AI Collaboration. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24). Association for Computing Machinery, New York, NY, USA, Article 845, 19 pages. https://doi.org/10.1145/3613904.3642726
[38] Wenchao Li, Sarah Schöttler, James Scott-Brown, Yun Wang, Siming Chen, Huamin Qu, and Benjamin Bach. 2023. NetworkNarratives: Data Tours for Visual Network Exploration and Analysis. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 172, 15 pages. https://doi.org/10.1145/3544548.3581452
[39] Jian Liao, Adnan Karim, Shivesh Singh Jadon, Rubaiat Habib Kazi, and Ryo Suzuki. 2022. RealityTalk: Real-Time Speech-Driven Augmented Presentation for AR Live Storytelling. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST ’22). Association for Computing Machinery, New York, NY, USA, Article 17, 12 pages. https://doi.org/10.1145/3526113.3545702
[40] Kristine Lund. 2007. The Importance of Gaze and Gesture in Interactive Multimodal Explanation. Language Resources and Evaluation 41 (2007), 289–303. https://doi.org/10.1007/s10579-007-9048-2
[41] Meredith Ringel Morris, Andreea Danielescu, Steven Drucker, Danyel Fisher, Bongshin Lee, m. c. schraefel, and Jacob O. Wobbrock. 2014. Reducing legacy bias in gesture elicitation studies. Interactions 21, 3 (2014), 40–45. https://doi.org/10.1145/2591689
[42] Mark Newman. 2018. Networks. Oxford University Press, Oxford, UK.
[43] Adegboyega Ojo and Bahareh Heravi. 2018. Patterns in Award Winning Data Storytelling. Digital Journalism 6, 6 (2018), 693–718. https://doi.org/10.1080/21670811.2017.1403291
[44] Dominic Potts, Martynas Dabravalskis, and Steven Houben. 2022. TangibleTouch: A Toolkit for Designing Surface-based Gestures for Tangible Interfaces. In Proceedings of the Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’22). Association for Computing Machinery, New York, NY, USA, Article 39, 14 pages. https://doi.org/10.1145/3490149.3502263
[45] S Price, Y Rogers, M Scaife, D Stanton, and H Neale. 2003. Using ’tangibles’ to promote novel forms of playful learning. Interacting with Computers 15, 2 (2003), 169–185. https://doi.org/10.1016/S0953-5438(03)00006-7
[46] Nathalie Henry Riche, Christophe Hurter, Nicholas Diakopoulos, and Sheelagh Carpendale. 2018. Data-Driven Storytelling. CRC Press, Boca Raton, FL. https://doi.org/10.1201/9781315281575
[47] Hugo Romat, Caroline Appert, and Emmanuel Pietriga. 2021. Expressive Authoring of Node-Link Diagrams With Graphies. IEEE Transactions on Visualization and Computer Graphics 27, 4 (2021), 2329–2340. https://doi.org/10.1109/TVCG.2019.2950932
[48] Hans Rosling. 2007. The best stats you’ve ever seen. https://www.youtube.com/watch?v=hVimVzgtD6w.
[49] Hans Rosling. 2011. Hans Rosling’s 200 Countries, 200 Years, 4 Minutes. https://www.youtube.com/watch?v=jbkSRLYSojo.
[50] Hans Rosling. 2014. Global population growth, box by box. https://www.ted.com/talks/hans_rosling_global_population_growth_box_by_box.
[51] Hans Rosling. 2015. How not to be ignorant about the world. https://www.youtube.com/watch?v=Sm5xF-UYgdg.
[52] Hans Rosling. 2016. Numbers are boring, people are interesting. https://www.youtube.com/watch?v=nh94kK05l-M&ab_channel=TEDxTalks.
[53] Hans Rosling. 2016. Why the world population won’t exceed 11 billion. https://www.youtube.com/watch?v=2LyzBoHo5EI.
[54] Sébastien Rufiange and Michael J. McGuffin. 2013. DiffAni: Visualizing Dynamic Graphs with a Hybrid of Difference Maps and Animation. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2556–2565. https://doi.org/10.1109/TVCG.2013.149
[55] Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, and Wilmot Li. 2019. Interactive Body-Driven Graphics for Augmented Video Performance. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300852
[56] Kadek Ananta Satriadi, Barrett Ens, Sarah Goodwin, and Tim Dwyer. 2023. Active Proxy Dashboard: Binding Physical Referents and Abstract Data Representations in Situated Visualization through Tangible Interaction. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23). Association for Computing Machinery, New York, NY, USA, Article 23, 7 pages. https://doi.org/10.1145/3544549.3585797
[57] Kadek Ananta Satriadi, Jim Smiley, Barrett Ens, Maxime Cordeil, Tobias Czauderna, Benjamin Lee, Ying Yang, Tim Dwyer, and Bernhard Jenny. 2022. Tangible Globes for Data Visualisation in Augmented Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 505, 16 pages. https://doi.org/10.1145/3491102.3517715
[58] Edward Segel and Jeffrey Heer. 2010. Narrative Visualization: Telling Stories with Data. IEEE Transactions on Visualization and Computer Graphics 16, 6 (2010), 1139–1148. https://doi.org/10.1109/TVCG.2010.179
[59] Seokmin Kang, Barbara Tversky, and John B. Black. 2015. Coordinating Gesture, Word, and Diagram: Explanations for Experts and Novices. Spatial Cognition & Computation 15, 1 (2015), 1–26. https://doi.org/10.1080/13875868.2014.958837
CHI ’25, April 26–May 01, 2025, Yokohama, Japan Takahira et al.
[60] Orit Shaer and Eva Hornecker. 2010. Tangible User Interfaces: Past, Present, and Future Directions. Found. Trends Hum.-Comput. Interact. 3, 1–2 (2010), 1–137. https://doi.org/10.1561/1100000026
[61] Andre Suslik Spritzer, Jeremy Boy, Pierre Dragicevic, Jean-Daniel Fekete, and Carla Maria Dal Sasso Freitas. 2015. Towards a smooth design process for static communicative node-link diagrams. Computer Graphics Forum 34, 3 (2015), 461–470. https://doi.org/10.1111/cgf.12658
[62] Arjun Srinivasan and Matthew Brehmer. 2023. Combining Voice and Gesture for Presenting Data to Remote Audiences. In IEEE VIS 2023 Workshop on Multimodal Experiences for Remote Communication Around Data Online (MERCADO ’23). IEEE Educational Activities Department, Melbourne, Australia, 1–2. arjun010.github.io/static/papers/mm-presentationmercado23.pdf
[63] Arjun Srinivasan and John Stasko. 2018. Orko: Facilitating Multimodal Interaction for Visual Exploration and Analysis of Networks. IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 511–521. https://doi.org/10.1109/TVCG.2017.2745219
[64] Jürgen Streeck. 1993. Gesture as communication I: Its coordination with gaze and speech. Communication Monographs 60, 4 (1993), 275–299. https://doi.org/10.1080/03637759309376314
[65] Ryo Suzuki, Clement Zheng, Yasuaki Kakehi, Tom Yeh, Ellen Yi-Luen Do, Mark D. Gross, and Daniel Leithinger. 2019. ShapeBots: Shape-changing Swarm Robots. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, LA, USA) (UIST ’19). Association for Computing Machinery, New York, NY, USA, 493–505. https://doi.org/10.1145/3332165.3347911
[66] Faisal Taher, John Hardy, Abhijit Karnik, Christian Weichel, Yvonne Jansen, Kasper Hornbæk, and Jason Alexander. 2015. Exploring Interactions with Physically Dynamic Bar Charts. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 3237–3246. https://doi.org/10.1145/2702123.2702604
[67] Wai Tong, Chen Zhu-Tian, Meng Xia, Leo Yu-Ho Lo, Linping Yuan, Benjamin Bach, and Huamin Qu. 2023. Exploring Interactions with Printed Data Visualizations in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 29, 1 (2023), 418–428. https://doi.org/10.1109/TVCG.2022.3209386
[68] B. Ullmer and H. Ishii. 2000. Emerging frameworks for tangible user interfaces. IBM Systems Journal 39, 3.4 (2000), 915–931. https://doi.org/10.1147/sj.393.0915
[69] Annemiek Veldhuis, Rong-Hao Liang, and Tilde Bekker. 2020. CoDa: Collaborative Data Interpretation Through an Interactive Tangible Scatterplot. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (Sydney, NSW, Australia) (TEI ’20). Association for Computing Machinery, New York, NY, USA, 323–336. https://doi.org/10.1145/3374920.3374934
[70] Xmind Ltd. 2014. Xmind: A Full-Featured Mind Mapping and Brainstorming Tool. https://xmind.app. Version 8 Update 3, Accessed: 2024-09-11.
[71] Yaying Zhang, Rongkai Shi, and Hai-Ning Liang. 2024. Designing Stick-Based Extended Reality Controllers: A Participatory Approach. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24). Association for Computing Machinery, New York, NY, USA, Article 103, 6 pages. https://doi.org/10.1145/3613905.3650925
[72] Zhenpeng Zhao and Niklas Elmqvist. 2022. DataTV: Streaming Data Videos for Storytelling. arXiv:2210.08175 [cs.HC] https://arxiv.org/abs/2210.08175
[73] Zoom Video Communications, Inc. 2024. Zoom: Video Conferencing, Web Conferencing, Webinars, Screen Sharing. https://zoom.us. Version 5.15.2, Accessed: 2024-11-29.